269 research outputs found

    Combining Static and Dynamic Analysis for Vulnerability Detection

    In this paper, we present a hybrid approach for buffer overflow detection in C code. The approach combines static and dynamic analysis of the application under investigation. The static part consists of calculating taint dependency sequences (TDS) between user-controlled inputs and vulnerable statements. This process is akin to program slicing: it computes the tainted data- and control-flow paths that exhibit the dependence between tainted program inputs and vulnerable statements in the code. The dynamic part consists of executing the program along TDSs to trigger the vulnerability by generating suitable inputs. We use a genetic algorithm to generate inputs, and propose a fitness function that approximates the program behavior (control flow) based on the frequencies of the statements along TDSs. This runtime aspect makes the approach faster and more accurate. We provide experimental results on the Verisec benchmark to validate our approach.
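    To make the idea concrete, here is a minimal sketch of a TDS-guided fitness function. The paper's actual function weighs statement frequencies along the TDS; this illustrative version only scores how far along the sequence an execution trace progresses (statement identifiers and the prefix-matching scheme are assumptions, not the paper's definition):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative fitness for a genetic algorithm: reward executions whose
   trace covers the statements of a taint dependency sequence (TDS) in
   order.  Returns the length of the longest TDS prefix reached by the
   trace; a higher score means the input drove execution closer to the
   vulnerable statement at the end of the TDS. */
size_t tds_fitness(const int *trace, size_t trace_len,
                   const int *tds, size_t tds_len) {
    size_t matched = 0;
    for (size_t i = 0; i < trace_len && matched < tds_len; i++)
        if (trace[i] == tds[matched])
            matched++;          /* next TDS statement reached */
    return matched;
}
```

A genetic algorithm would then select, cross over, and mutate the inputs whose traces score highest, until the final TDS statement is reached and the overflow is triggered.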

    Draft genome sequence of Pseudomonas sp. strain ADP, a bacterial model for studying the degradation of the herbicide atrazine

    We report here the 7,259,392-bp draft genome of Pseudomonas sp. strain ADP, a bacterial strain first isolated in the 1990s from soil for its ability to mineralize the herbicide atrazine. It has been extensively studied as a model to understand the atrazine biodegradation pathway. This genome will be used as a reference and compared to evolved populations obtained by experimental evolution conducted on this strain under atrazine selection pressure. Copyright 2016 Devers-Lamrani et al.

    Lazart: A Symbolic Approach for Evaluating the Robustness of Secured Codes against Control Flow Injections

    In the domain of smart cards, secured devices must be protected against high-level attack potential [1]. According to norms such as the Common Criteria [2], the vulnerability analysis must cover the current state of the art in terms of attacks. Nowadays, a very classical type of attack is fault injection, conducted by means of laser-based techniques. We propose a global approach, called Lazart, to evaluate code robustness against fault injections targeting control-flow modifications. The originality of Lazart is twofold. First, we encompass the evaluation process as a whole: starting from a fault model, we produce attacks (or establish their absence), taking software countermeasures into consideration. Furthermore, in line with the current state of the art, our methodology takes into account multiple transient fault injections and their combinations. The proposed approach is supported by an effective tool suite based on the LLVM format [3] and the KLEE symbolic test generator [4].
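    A classic software countermeasure of the kind such a tool must reason about is test duplication, shown below. A single injected fault can invert one branch condition, but defeating both copies requires the multiple-fault scenario Lazart explicitly explores. This is a generic textbook example, not code from the paper, and a real implementation would also have to prevent the compiler from merging the two tests:

```c
#include <assert.h>

/* Redundant security check: a single test-inversion fault flips at most
   one of the two identical conditions, so the attacker needs two faults
   to reach ok = 1 with a wrong PIN.  `volatile` discourages the compiler
   from collapsing the duplicated logic (illustrative only). */
int check_pin(int entered, int expected) {
    volatile int ok = 0;
    if (entered == expected) {
        if (entered == expected) {   /* duplicated test: 2nd fault needed */
            ok = 1;
        }
    }
    return ok;
}
```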

    Specification and Verification of various Distributed Leader Election Algorithms for Unidirectional Ring Networks

    This report deals with the formal specification and verification of distributed leader election algorithms for a set of machines connected by a unidirectional ring network. Starting from an algorithm proposed by Le Lann in 1977, and its variant proposed by Chang and Roberts in 1979, we study the robustness of this class of algorithms in the presence of an unreliable communication medium and/or unreliable machines. We suggest various improvements of these algorithms in order to obtain a fully fault-tolerant protocol. These algorithms are formally described using the ISO specification language LOTOS and verified (for a fixed number of machines) using the CADP (CÆSAR/ALDEBARAN) toolbox. Model-checking and bisimulation techniques allow the verification of these non-trivial algorithms to be carried out automatically.
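    The core election idea can be sketched as a toy synchronous simulation: identifiers circulate around the ring, only the largest survives, and a node that receives its own identifier back is elected. This is a simplified max-propagation variant of the Chang-Roberts scheme (the paper works with LOTOS specifications and fault-tolerant extensions, not C code), assuming distinct identifiers and a correct ring:

```c
#include <assert.h>
#include <stddef.h>

/* Toy synchronous election on a unidirectional ring (n <= 64, distinct
   ids).  Each round, every node reads the message on its incoming link;
   if it sees its own id, the id has survived a full tour and the node is
   the leader.  Otherwise it forwards the larger of the incoming id and
   its own, so only the maximum id completes the tour. */
int ring_election(const int *ids, size_t n) {
    int msg[64];
    for (size_t i = 0; i < n; i++)
        msg[i] = ids[i];                     /* round 0: propose own id */
    for (size_t round = 0; round < n; round++) {
        int next[64];
        for (size_t i = 0; i < n; i++) {
            int in = msg[(i + n - 1) % n];   /* from ring predecessor */
            if (in == ids[i])
                return in;                   /* own id returned: elected */
            next[i] = in > ids[i] ? in : ids[i];
        }
        for (size_t i = 0; i < n; i++)
            msg[i] = next[i];
    }
    return -1;                               /* unreachable on a correct ring */
}
```

The fault-tolerance questions studied in the report start exactly where this sketch stops: what happens when a link drops `msg[i]` or a machine silently fails mid-tour.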

    BAXMC: a CEGAR approach to Max#SAT

    Max#SAT is an important problem with multiple applications in security and program synthesis that has been proven hard to solve. It is defined as follows: given a parameterized quantifier-free propositional formula, compute parameters such that the number of models of the formula is maximal. As an extension, the formula can include an existential prefix. We propose a CEGAR-based algorithm and refinements thereof, based on either exact or approximate model counting, and prove its correctness in both cases. Our experiments show that this algorithm has much better effective complexity than the state of the art. Comment: FMCAD 2022, Oct 2022, Trento, Italy.
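    The problem semantics can be pinned down by a brute-force baseline: enumerate every assignment of the parameter bits and count models over the remaining bits. BAXMC replaces this doubly exponential enumeration with CEGAR plus exact or approximate model counting; the sketch below (formula encoding and bit layout are our own choices) only fixes what "maximal number of models" means:

```c
#include <assert.h>

/* A formula over k parameter bits p and m counting bits x, given as a
   predicate.  Max#SAT asks for the p maximizing |{x : f(p, x)}|. */
typedef int (*formula)(unsigned p, unsigned x);

/* Brute-force Max#SAT: exponential in both k and m, correct by
   construction.  Writes the best parameter assignment to *best_p and
   returns its model count. */
unsigned max_count_sat(formula f, int k, int m, unsigned *best_p) {
    unsigned best = 0;
    *best_p = 0;
    for (unsigned p = 0; p < (1u << k); p++) {
        unsigned count = 0;
        for (unsigned x = 0; x < (1u << m); x++)
            count += f(p, x) ? 1u : 0u;
        if (count > best) { best = count; *best_p = p; }
    }
    return best;
}

/* Example: phi(p, x1 x0) = p -> (x0 | x1).  With p = 0 all four x are
   models; with p = 1 only three are, so the maximum is 4 at p = 0. */
static int phi(unsigned p, unsigned x) {
    return !(p & 1) || (x & 1) || (x & 2);
}
```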

    Get rid of inline assembly through verification-oriented lifting

    Formal methods for software development have made great strides in the last two decades, to the point that their application to safety-critical embedded software is an undeniable success. Their extension to non-critical software is one of the notable forthcoming challenges. For example, C programmers regularly use inline assembly for low-level optimizations and system primitives, which usually renders state-of-the-art formal analyzers developed for C ineffective. We thus propose TInA, an automated, generic, trustworthy and verification-oriented lifting technique that turns inline assembly into semantically equivalent C code, in order to take advantage of existing C analyzers. Extensive experiments on real-world C code with inline assembly (including GMP and ffmpeg) show the feasibility and benefits of TInA.
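    The lifting idea in miniature: the assembly chunk below is an opaque black box to a C-level analyzer, while its lifted counterpart is plain C such a tool can reason about. TInA automates and proves such translations; this hand-written pair is only an illustration of the before/after shape, with a portability guard added by us:

```c
#include <assert.h>
#include <stdint.h>

/* Original style: a byte-swap done via x86 inline assembly.  A C analyzer
   cannot look inside the asm statement.  Guarded so the example still
   compiles off x86. */
static inline uint32_t bswap32_asm(uint32_t x) {
#if defined(__x86_64__) || defined(__i386__)
    __asm__("bswap %0" : "+r"(x));
    return x;
#else
    return __builtin_bswap32(x);   /* fallback on other architectures */
#endif
}

/* "Lifted" version: semantically equivalent, analyzer-friendly C. */
static inline uint32_t bswap32_lifted(uint32_t x) {
    return (x >> 24) | ((x >> 8) & 0x0000FF00u)
         | ((x << 8) & 0x00FF0000u) | (x << 24);
}
```

Once the lifted C provably matches the chunk, every existing C analyzer applies to code that was previously out of reach.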

    Interface Compliance of Inline Assembly: Automatically Check, Patch and Refine

    Inline assembly is still a common practice in low-level C programming, typically for efficiency reasons or for accessing specific hardware resources. Such embedded assembly code in the GNU syntax (supported by major compilers such as GCC, Clang and ICC) has an interface specifying how the assembly interacts with the C environment. For simplicity, the compiler treats GNU inline assembly chunks as black boxes and relies only on their interface to glue them correctly into the compiled C code. The adequacy between an assembly chunk and its interface (called compliance) is therefore of primary importance, as compliance issues can lead to subtle and hard-to-find bugs. We propose RUSTInA, the first automated technique for formally checking inline assembly compliance, with the extra ability to propose (proven) patches and (optimization) refinements in certain cases. RUSTInA is based on an original formalization of the inline assembly compliance problem together with novel dedicated algorithms. Our prototype has been evaluated on 202 Debian packages with inline assembly (2656 chunks), finding 2183 issues in 85 packages, of which 986 are significant issues in 54 packages (including major projects such as ffmpeg and ALSA), and proposing patches for 92% of them. So far, 38 patches have already been accepted (solving 156 significant issues), with positive feedback from development teams.
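    A typical compliance pitfall: the x86 `mul` instruction always overwrites `edx`, so an interface that omits it from the clobber list silently lies to the compiler, which may keep a live value there. The compliant chunk below (our own example in the spirit of the paper, not one of its case studies, and guarded for portability) declares everything it touches:

```c
#include <assert.h>
#include <stdint.h>

/* Multiply by 3 via x86 `mul`.  The interface must declare: the output
   in eax ("=a"), the input in eax ("a"), the multiplier in any register
   ("r"), and, crucially, that `mul` clobbers edx and the flags ("cc").
   Dropping "edx" from the clobber list is the classic compliance bug. */
static inline uint32_t times3(uint32_t x) {
#if defined(__x86_64__) || defined(__i386__)
    uint32_t out;
    __asm__("mull %2"
            : "=a"(out)
            : "a"(x), "r"((uint32_t)3)
            : "edx", "cc");
    return out;
#else
    return x * 3u;                 /* fallback off x86 */
#endif
}
```

Such bugs are hard to observe because they only bite when the compiler happens to allocate a live value in the undeclared register, which is exactly why formal compliance checking pays off.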

    What Can You Verify and Enforce at Runtime?

    The underlying property, its definition and its representation play a major role when monitoring a system. Having a suitable and convenient framework to express properties is thus a concern for runtime analysis. It is desirable to delineate in this framework the sets of properties to which runtime analysis approaches can be applied. This paper presents a unified view of runtime verification and enforcement of properties in the Safety-Progress classification. First, we extend the Safety-Progress classification of properties to a runtime context. Second, we characterize the sets of properties which can be verified (monitorable properties) and enforced (enforceable properties) at runtime. In particular, we propose an alternative definition of "property monitoring" to the one classically used in this context. Finally, for the delineated sets of properties, we define specialized verification and enforcement monitors.
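    For intuition, a verification monitor for a safety property is just a finite automaton fed the system's events: a violation is a finite, irremediable prefix, which is what places safety properties squarely in the monitorable fragment. The property, event alphabet, and encoding below are our own generic example, not the paper's formal construction:

```c
#include <assert.h>

/* Monitor for the safety property "no SEND after CLOSE (until reOPEN)".
   Once a violating prefix is observed the verdict is definitive, a
   hallmark of safety properties. */
enum event   { OPEN, SEND, CLOSE };
enum verdict { OK, VIOLATION };

struct monitor { int closed; enum verdict verdict; };

void monitor_step(struct monitor *m, enum event e) {
    if (m->verdict == VIOLATION) return;          /* verdict is final */
    if (e == CLOSE)                 m->closed = 1;
    else if (e == SEND && m->closed) m->verdict = VIOLATION;
    else if (e == OPEN)             m->closed = 0;
}
```

An enforcement monitor would go one step further and suppress or delay the offending SEND instead of merely reporting it.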

    Leveraging Concepts in Open Access Publications

    Aim: This paper addresses the integration of a Named Entity Recognition and Disambiguation (NERD) service within a group of open access (OA) publishing digital platforms and considers its potential impact on both research and scholarly publishing. This application, called entity-fishing, was initially developed by Inria in the context of the EU FP7 project CENDARI (Lopez et al., 2014) and provides automatic entity recognition and disambiguation against Wikipedia and Wikidata. Distributed under an open-source licence, it was deployed as a web service in the DARIAH infrastructure hosted at the French HumaNum. Methods: In this paper, we focus on the specific issues related to its integration on five OA platforms specialized in the publication of scholarly monographs in the social sciences and humanities, as part of the work carried out within the EU H2020 project HIRMEOS (High Integration of Research Monographs in the European Open Science infrastructure). Results and Discussion: We give a brief overview of the current status and evolution of OA publications and how HIRMEOS aims to contribute to this. We then give a comprehensive description of the entity-fishing service, focusing on its concrete applications in real use cases, together with further ideas on how to exploit the generated annotations. Conclusions: We show that entity-fishing annotations can improve both the research and the publishing process. They can be used to achieve a better and quicker understanding of the specific disciplinary language of certain monographs and so encourage non-specialists to use them. In addition, a systematic implementation of the entity-fishing service can be used by publishers to generate thematic indexes within book collections to allow better cross-linking and query functions.